
Search in the Catalogues and Directories

Hits 1 – 20 of 52

1. Probing for the Usage of Grammatical Number ... (BASE)
2. On Homophony and Rényi Entropy ... (BASE)
3. On Homophony and Rényi Entropy ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
6. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
7. Revisiting the Uniform Information Density Hypothesis ... (BASE)
8. Revisiting the Uniform Information Density Hypothesis ... (BASE)
9. Modeling the Unigram Distribution ... (BASE)
10. A Bayesian Framework for Information-Theoretic Probing ... (BASE)
    Anthology paper link: https://aclanthology.org/2021.emnlp-main.229/
    Abstract: Pimentel et al. (2020) recently analysed probing from an information-theoretic perspective. They argue that probing should be seen as approximating a mutual information. This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences. The mutual information, however, assumes the true probability distribution of a pair of random variables is known, leading to unintuitive results in settings where it is not. This paper proposes a new framework to measure what we term Bayesian mutual information, which analyses information from the perspective of Bayesian agents – allowing for more intuitive findings in scenarios with finite data. For instance, under Bayesian MI we have that data can add information, processing can help, and information can hurt, which makes it more intuitive for machine learning applications. Finally, we apply our framework to ...
    Keywords: Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
    URL: https://underline.io/lecture/37413-a-bayesian-framework-for-information-theoretic-probing
    DOI: https://dx.doi.org/10.48448/gnht-ez32
    Note: a short notation sketch of the classical vs. Bayesian mutual-information contrast drawn in this abstract follows the results list below.
11. A surprisal–duration trade-off across and within the world's languages ... (BASE)
12. Revisiting the Uniform Information Density Hypothesis ... (BASE)
13. What About the Precedent: An Information-Theoretic Analysis of Common Law ... (BASE)
14. Modeling the Unigram Distribution ... (BASE)
15. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
16. How (Non-)Optimal is the Lexicon? ... (BASE)
17. Disambiguatory Signals are Stronger in Word-initial Positions ... (BASE)
18. Modeling the Unigram Distribution (BASE)
    In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)
19. What About the Precedent: An Information-Theoretic Analysis of Common Law (BASE)
    In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
20. Finding Concept-specific Biases in Form–Meaning Associations (BASE)
    In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
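As flagged in the note under hit 10, here is a minimal sketch of the contrast that abstract draws, written in standard information-theoretic notation rather than the paper's own formalisation; the belief distribution q(· | D) and the label I_Bayes are my notation, not terms taken from the paper.

% Classical mutual information, defined with respect to the true joint p(x, y);
% it is a fixed quantity once the true distribution is fixed.
I(X; Y) \;=\; H(Y) - H(Y \mid X)
        \;=\; \sum_{x, y} p(x, y)\, \log \frac{p(x, y)}{p(x)\, p(y)}

% Hedged sketch of the Bayesian reading described in the abstract (assumed
% notation, not the paper's exact definition): the entropies are taken under
% the agent's belief q(. | D) formed from finite data D, so the quantity
% depends on the data the agent has seen.
I_{\mathrm{Bayes}}(X; Y \mid \mathcal{D})
  \;=\; H_{q(\cdot \mid \mathcal{D})}(Y) \;-\; H_{q(\cdot \mid \mathcal{D})}(Y \mid X)

Under the classical definition the quantity is pinned down by the true distribution, so additional data cannot add information and processing cannot help; once beliefs formed from finite data stand in for the true distribution, those guarantees no longer apply, which is consistent with the abstract's claims that data can add information, processing can help, and information can hurt.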


Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 52